Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework
This paper examines the current landscape of AI regulations, highlighting the divergent approaches being taken, and proposes an alternative contextual, coherent, and commensurable (3C) framework. The EU, Canada, South Korea, and Brazil follow a horizontal or lateral approach that postulates the homogeneity of AI systems, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the U.K., Israel, Switzerland, Japan, and China have pursued a context-specific or modular approach, tailoring regulations to the specific use cases of AI systems. The U.S. is reevaluating its strategy, with growing support for controlling existential risks associated with AI. Addressing such fragmentation of AI regulations is crucial to ensure the interoperability of AI. The present degree of proportionality, granularity, and foreseeability of the EU AI Act is not sufficient to garner consensus. The context-specific approach holds greater promise but requires further development in terms of detail, coherence, and commensurability. To strike a balance, this paper proposes a hybrid 3C framework. To ensure contextuality, the framework categorizes AI into distinct types based on their usage and interaction with humans: autonomous, allocative, punitive, cognitive, and generative AI. To ensure coherency, each category is assigned specific regulatory objectives: safety for autonomous AI; fairness and explainability for allocative AI; accuracy and explainability for punitive AI; accuracy, robustness, and privacy for cognitive AI; and the mitigation of infringement and misuse for generative AI. To ensure commensurability, the framework promotes the adoption of international industry standards that convert principles into quantifiable metrics. In doing so, the framework is expected to foster international collaboration and standardization without imposing excessive compliance costs.
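The category-to-objective assignments above amount to a lookup table, which can be sketched as data. The category names and objectives are taken from the abstract; the dictionary layout and the `objectives_for` helper are illustrative, not part of the proposal.

```python
# The 3C framework's mapping from AI category to regulatory objectives,
# as listed in the abstract. The data-structure choice is hypothetical.
REGULATORY_OBJECTIVES = {
    "autonomous": ["safety"],
    "allocative": ["fairness", "explainability"],
    "punitive": ["accuracy", "explainability"],
    "cognitive": ["accuracy", "robustness", "privacy"],
    "generative": ["infringement mitigation", "misuse mitigation"],
}

def objectives_for(category: str) -> list[str]:
    """Return the regulatory objectives assigned to an AI category."""
    try:
        return REGULATORY_OBJECTIVES[category]
    except KeyError:
        raise ValueError(f"unknown AI category: {category}") from None
```

Expressing the taxonomy this way makes the "coherency" claim concrete: each category carries a fixed, enumerable set of objectives that standards bodies could map onto metrics.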
Lessons From the World's Two Experiments in AI Governance - Carnegie Endowment for International Peace
Artificial intelligence (AI) is both omnipresent and conceptually slippery, making it notoriously hard to regulate. Fortunately for the rest of the world, two major experiments in the design of AI governance are currently playing out in Europe and China. The European Union (EU) is racing to pass its draft Artificial Intelligence Act, a sweeping piece of legislation intended to govern nearly all uses of AI. Meanwhile, China is rolling out a series of regulations targeting specific types of algorithms and AI capabilities. For the host of countries starting their own AI governance initiatives, learning from the successes and failures of these two initial efforts to govern AI will be crucial.
Serial Chain Hinge Support for Soft, Robust and Effective Grasp
Stuhne, Dario, Vuletic, Jelena, Car, Marsela, Orsag, Matko
Abstract: This paper presents a serial chain hinge support, a rigid but flexible structure that improves the mechanical performance and robustness of soft-fingered grippers. Gravity can reduce the integrity of soft fingers during a horizontal approach, resulting in a lower maximum payload caused by large deflection of the fingers. To substantiate this claim, we performed several experiments on the payload and deflection of the SofIA gripper under both horizontal and vertical approaches. In addition, we show that the reinforcement does not impede the original compliant behavior of the gripper, maintaining the original kinematic model functionality. Finally, we validated the improved SofIA gripper in agricultural and everyday activities.
Profiler: Profile-Based Model to Detect Phishing Emails
Shmalko, Mariya, Abuadbba, Alsharif, Gaire, Raj, Wu, Tingmin, Paik, Hye-Young, Nepal, Surya
Email phishing has become more prevalent and grows more sophisticated over time. To combat this rise, many machine learning (ML) algorithms for detecting phishing emails have been developed. However, due to the limited email data sets on which these algorithms train, they are not adept at recognising varied attacks and, thus, suffer from concept drift; attackers can introduce small changes in the statistical characteristics of their emails or websites to successfully bypass detection. Over time, a gap develops between the reported accuracy in the literature and the algorithm's actual effectiveness in the real world. This manifests itself in frequent false positive and false negative classifications. To this end, we propose a multidimensional risk assessment of emails to reduce the feasibility of an attacker adapting their email and avoiding detection. This horizontal approach to email phishing detection profiles an incoming email on its main features. We develop a risk assessment framework that includes three models which analyse an email's (1) threat level, (2) cognitive manipulation, and (3) email type, which we combine to return the final risk assessment score. The Profiler does not require large data sets to train on to be effective, and its analysis of varied email features reduces the impact of concept drift. Our Profiler can be used in conjunction with ML approaches, to reduce their misclassifications, or as a labeller for large email data sets in the training stage. We evaluate the efficacy of the Profiler against a machine learning ensemble using state-of-the-art ML algorithms on a data set of 9000 legitimate and 900 phishing emails from a large Australian research organisation. Our results indicate that the Profiler mitigates the impact of concept drift, and delivers 30% fewer false positive and 25% fewer false negative email classifications than the ML ensemble's approach.
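The three-model structure described in the abstract can be sketched as follows. The abstract does not specify how the three sub-scores are combined; a weighted sum with illustrative weights is assumed here, and the class and function names are hypothetical.

```python
# Hypothetical sketch of combining the Profiler's three per-dimension
# scores (threat level, cognitive manipulation, email type) into a final
# risk score. The combination rule and weights are assumptions.
from dataclasses import dataclass

@dataclass
class EmailScores:
    threat_level: float            # model 1: threat-level score in [0, 1]
    cognitive_manipulation: float  # model 2: manipulation score in [0, 1]
    email_type: float              # model 3: type-based risk in [0, 1]

def risk_score(s: EmailScores, weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted combination of the three sub-scores into one risk score."""
    parts = (s.threat_level, s.cognitive_manipulation, s.email_type)
    return sum(w * p for w, p in zip(weights, parts))
```

Because each dimension is scored independently, an attacker who perturbs one feature family (say, wording that drives the manipulation score) still faces the other two models, which is the intuition behind the claimed robustness to concept drift.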
AI ethics - how do we put theory into practice when international approaches vary?
Many governments around the world have rightly put ethical development and deployment at the heart of their AI thinking. Core to this complex issue is a set of interconnected problems - AI systems that may automate societal problems, whether due to a systemic lack of diversity in development teams or the use of training data that contains historic or structural biases. The design of the systems themselves may also be a factor. The result may be the algorithmic exclusion of individuals or groups because of their ethnicity, gender, sexuality, religion, or socioeconomic background. For example, facial recognition systems may misidentify black or Asian people because of a lack of relevant data, and CV-scanning applications may reject applicants from some postcodes/zip codes because, historically, human employers have actively excluded those jobseekers.
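One common way to surface the kind of group-level exclusion described above is to compare selection rates across groups. A minimal sketch follows; the function names are hypothetical, and the 0.8 flag threshold used in the usage note reflects the conventional "four-fifths rule", which is not from the article.

```python
# Compare per-group selection rates for a batch of algorithmic decisions,
# e.g. CV-screening outcomes keyed by an applicant attribute.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected: bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())
```

A ratio well below 0.8 would typically prompt a closer audit of the training data and features, which is exactly the translation of ethical principle into measurable practice that the article asks for.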